feat: concurrent multi-tool daemon support (#12) #22
don-petry wants to merge 4 commits into Joaolfelicio:main from
Conversation
Adds a `--tools` option to run multiple providers concurrently in a single daemon process. Each provider watches in its own async task, feeding interactions into a shared `asyncio.Queue` for unified processing.

Key changes:
- `TOOL_REGISTRY` maps tool names to `(provider_class, bootstrap_fn)` pairs
- `_create_providers()` bootstraps and instantiates multiple providers
- `run_daemon()` creates watcher tasks per provider via `asyncio.create_task`
- Dashboard shows all active tools in the header
- `--tool` still works for single-tool backward compatibility
- `--tools` validates tool names and rejects unknowns

Usage: `context-scribe --tools gemini-cli,claude,copilot`

Closes Joaolfelicio#12

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
Adds support for running multiple log providers concurrently within a single context-scribe daemon instance, using a shared evaluator + Memory Bank client and an interaction fan-in queue.
Changes:
- Introduces a `TOOL_REGISTRY` and `_create_providers()` to bootstrap/instantiate providers from tool names.
- Adds a `--tools` (CSV) CLI option and updates the daemon loop to run one watcher task per provider, feeding a shared `asyncio.Queue`.
- Adds new unit tests covering registry population and provider creation behavior.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| context_scribe/main.py | Adds registry/provider factory + concurrent watcher tasks + --tools CLI parsing and multi-tool dashboard labeling. |
| tests/test_multi_tool.py | Adds tests for TOOL_REGISTRY and _create_providers() behavior (single/multi/unknown/bootstrap). |
- `_create_providers()` now raises `ValueError` for unknown tools
- Queue bounded to `maxsize=1000` to prevent memory growth
- `_watch_provider()` catches `StopIteration` and exceptions gracefully
- `watcher_tasks` initialized before `try` to prevent `UnboundLocalError`
- `--tools` deduplicates and rejects empty input

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Joaolfelicio - What do you think about this enhancement idea?
- test_daemons.py: patch `_create_providers` instead of individual class names, since `TOOL_REGISTRY` captures class refs at import time
- `_watch_provider`: catch `KeyboardInterrupt` from mock generators
- Increase test wait timeout for queue-based async pipeline

Found via full test suite run with Python 3.12 + mcp installed.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.
- Reject empty/whitespace `--tools` values by checking `tools_csv is not None`
- Fail fast in `run_daemon()` when `tools=[]` instead of a silent fallback
- Add CLI unit tests for `--tools` deduplication, invalid, and empty input
- Remove unused `provider_class` from test_daemons.py parametrize
- Fix misleading docstring in test_multi_tool.py fixture

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
don-petry left a comment
Automated review — NEEDS HUMAN REVIEW
Risk: MEDIUM
Reviewed commit: d15609dabf906426a9c8a758429c6d322913d9f6
Council vote: security=MEDIUM · correctness=MEDIUM · maintainability=MEDIUM
Summary
The PR implements multi-tool concurrent daemon support (issue #12) cleanly — the TOOL_REGISTRY pattern, async queue fan-in, and 75 passing tests all look solid, and no security-sensitive surfaces were touched. However, the PR cannot be merged in its current state: the branch has conflicts with main (DIRTY merge state). Beyond the merge conflict, two correctness issues need attention: watcher tasks are cancelled but never awaited in the shutdown finally block (risk of use-after-close races), and generator cleanup relies on __del__ rather than explicit close(). Documentation also lags the implementation — CONTRIBUTING.md still describes the old if/elif tool-registration flow that the new TOOL_REGISTRY has replaced.
Linked issue analysis
Issue #12 — multi-tool concurrent daemon support: All acceptance criteria are addressed. The --tools CSV option is wired, TOOL_REGISTRY replaces the old per-tool dispatch, providers run as independent watcher tasks sharing a fan-in queue, and the shared evaluator + Memory Bank client are reused across tools. Tests cover registry population, provider creation (single/multi/unknown/bootstrap), and the daemon lifecycle.
Findings
MAJOR
- [major] (branch-level) — Merge conflict (correctness · maintainability): PR is in CONFLICTING / DIRTY state against `main`. Must be rebased before it can land.
- [major] context_scribe/main.py:336 — Async lifecycle (correctness): Watcher tasks are cancelled via `task.cancel()` but never awaited in the `finally` block. `mcp_client.close()` proceeds immediately, risking use-after-close errors and leaving `CancelledError` exceptions unobserved. Fix: `await asyncio.gather(*tasks, return_exceptions=True)` after cancellation.
- [major] CONTRIBUTING.md — Documentation drift (maintainability): CONTRIBUTING.md still instructs contributors to add tools via the old `if/elif` logic in `run_daemon`. The new `TOOL_REGISTRY` registration pattern replaces that workflow but is entirely undocumented.
MINOR
- [minor] context_scribe/main.py:263 — Resource leak (correctness): Generator cleanup for `provider.watch()` relies on `__del__` after task cancellation. Python does not guarantee prompt `__del__`, which can leak watchdog `Observer` threads. Fix: explicitly call `watch_iter.close()` in the cancel/finally path.
- [minor] context_scribe/main.py:264 — Deprecated API (correctness · maintainability): `asyncio.get_event_loop()` is called inside the async functions `_watch_provider()` and `_loop()`. On Python 3.10+ (this project targets 3.12), use `asyncio.get_running_loop()` from a running coroutine.
- [minor] context_scribe/main.py:242 — Evaluator selection (correctness): `_detect_evaluator(tool_names[0])` uses only the first tool when multiple tools are provided. `--tools gemini-cli,claude` would select a Gemini evaluator for Claude interactions, producing incorrect rule extraction.
- [minor] README.md — Documentation drift (maintainability): The Usage section shows only `--tool` examples; the new `--tools` multi-tool option is not mentioned.
- [minor] context_scribe/main.py — Missing type hints (maintainability): `_create_providers` has no return type annotation; `_watch_provider`'s `provider` parameter has no type hint. CONTRIBUTING.md requires type hints for all function signatures.
INFO
- [info] context_scribe/main.py — `--tools` is validated against `TOOL_REGISTRY` keys with dedup and empty-input rejection; no injection surface. (security)
- [info] context_scribe/main.py — `asyncio.Queue(maxsize=1000)` bounds memory growth from provider watchers. (security)
- [info] (PR comment) — Open collaborator question from don-petry (2026-04-05) has not been answered. (correctness)
- [info] tests/test_daemons.py — `test_run_daemon_tools` patches `bootstrap_global_config` even though `_create_providers` is fully mocked, making the bootstrap patch a no-op. (correctness · maintainability)
- [info] context_scribe/main.py — `from typing import List` is used; prefer the built-in `list[str]` on Python 3.10+. (maintainability)
- [info] tests/test_multi_tool.py — `test_create_providers_calls_bootstrap` mutates `TOOL_REGISTRY` directly instead of using `patch.dict`, which is safer against unexpected exceptions. (maintainability)
- [info] context_scribe/main.py — `_detect_evaluator` is called with `tool_names[0]` when multiple tools are provided; a comment explaining this choice would help future maintainers. (maintainability)
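The cancel-then-await fix recommended in the major async-lifecycle finding can be sketched as follows. The watcher bodies are placeholders; only the shutdown sequence mirrors the suggested change:

```python
import asyncio

async def _watcher() -> None:
    """Placeholder for a provider watch loop that runs until cancelled."""
    try:
        await asyncio.sleep(3600)  # stand-in for the real watch/queue loop
    except asyncio.CancelledError:
        # Per-watcher cleanup (e.g. closing the watch iterator) goes here.
        raise

async def shutdown_demo() -> bool:
    tasks = [asyncio.create_task(_watcher()) for _ in range(3)]
    await asyncio.sleep(0)  # the daemon's main loop would run here
    for task in tasks:
        task.cancel()
    # Awaiting the cancelled tasks ensures no CancelledError is left
    # unobserved and every watcher has fully exited before shared
    # resources (e.g. the MCP client) are closed.
    await asyncio.gather(*tasks, return_exceptions=True)
    return all(task.done() for task in tasks)
```

Without the `gather`, `task.cancel()` merely requests cancellation; the task may still be unwinding when `mcp_client.close()` runs, which is exactly the use-after-close race the finding describes.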
CI status
2 checks passing (Detect ecosystems, pip-audit), 4 skipped (dependabot, npm audit, govulncheck, cargo audit). No test-suite check appears in the rollup for this commit — branch is CONFLICTING / DIRTY against main, so merge-commit verification has not run. All 75 unit tests passed on the PR branch head per test logs.
Reviewed automatically by the don-petry PR-review council (security: opus 4.6 · correctness: sonnet 4.6 · maintainability: sonnet 4.6 · synthesis: sonnet 4.6). The marker on line 1 lets the agent detect new commits and re-review. Reply with @don-petry if you need a human.
Why
Users who work across multiple AI tools (Gemini, Claude, Copilot) must currently run separate daemon instances for each one. This is cumbersome to manage and wastes resources — each instance runs its own MCP client connection and evaluator. A single daemon that monitors all tools concurrently with shared infrastructure is simpler to operate and more resource-efficient.
Summary
- `--tools` CLI option to run multiple providers concurrently in a single daemon (e.g. `--tools gemini-cli,claude,copilot`)
- Each provider watcher runs via `asyncio.create_task`, feeding interactions into a shared bounded `asyncio.Queue` (`maxsize=1000`)
- Dashboard entries are labeled with `[tool_name]`
- `--tool` (single) still works for backward compatibility

Closes #12
Testing evidence (live, Python 3.12 + mcp, claude + copilot CLIs)
- `--tools claude,copilot` accepted by CLI
- `--tools invalid_tool` rejected with error
- `--tools claude,claude,copilot` deduplicates
- `--tools ,,,` (empty) rejected
- `--tool claude` backward compat works

Issues found and fixed during testing
- test_daemons.py: `TOOL_REGISTRY` holds import-time class refs → fixed to patch `_create_providers`
- `_watch_provider`: `KeyboardInterrupt` not caught (not a subclass of `Exception`) → added to the except clause

🤖 Generated with Claude Code
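The import-time-capture pitfall noted above can be demonstrated with a toy registry — the names here are illustrative, not the real test code. Rebinding a module-level class name (which is effectively what a name-based `mock.patch` does) cannot affect a reference the registry already holds; patching the registry entry or the factory itself can:

```python
from unittest import mock

class RealProvider: ...
class FakeProvider: ...

# The registry captures the class *object*, mirroring how TOOL_REGISTRY
# captures provider classes at import time.
REGISTRY = {"claude": RealProvider}

def create_provider(name: str):
    return REGISTRY[name]()

# Rebinding the module-level name does not touch the captured reference:
RealProvider = FakeProvider
still_original = type(create_provider("claude")).__name__ == "RealProvider"

# Patching the registry entry itself is what actually takes effect,
# and patch.dict restores the original even if the body raises:
with mock.patch.dict(REGISTRY, {"claude": FakeProvider}):
    now_fake = isinstance(create_provider("claude"), FakeProvider)

restored = type(REGISTRY["claude"]()).__name__ == "RealProvider"
```

This is why the test suite patches `_create_providers` (the factory) rather than the individual provider class names.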